

Unsupervised Data Augmentation

Unsupervised Data Augmentation (UDA) trains a model on both labeled and unlabeled data: the labeled data contributes a standard supervised loss, while the unlabeled data contributes a consistency loss that encourages the model to make similar predictions on an example and its augmented version.
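The combined objective can be sketched as below. This is a minimal illustration, not the paper's full training loop: `model`, the augmentation that produces `x_augmented`, and the weighting factor `lam` are all assumptions here.

```python
import torch
import torch.nn.functional as F

def uda_loss(model, x_labeled, y_labeled, x_unlabeled, x_augmented, lam=1.0):
    """Sketch of the UDA objective: supervised cross-entropy on labeled
    data plus an unsupervised consistency term on unlabeled data."""
    # Standard supervised loss on the labeled batch.
    sup_loss = F.cross_entropy(model(x_labeled), y_labeled)

    # Consistency loss: the prediction on the clean unlabeled example
    # serves as a fixed target for the prediction on its augmentation.
    with torch.no_grad():
        target = F.softmax(model(x_unlabeled), dim=-1)
    log_pred = F.log_softmax(model(x_augmented), dim=-1)
    unsup_loss = F.kl_div(log_pred, target, reduction="batchmean")

    return sup_loss + lam * unsup_loss
```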

Unsupervised Data Augmentation for Consistency Training

Semi-supervised learning has lately shown much promise in improving deep learning models when labeled data is scarce.

Augmentation Strategies for Different Tasks

Confidence-Based Masking

Specifically, in each minibatch the consistency loss term is computed only on examples whose highest predicted class probability exceeds a threshold β; β = 0.8 is used for CIFAR-10 and SVHN, and β = 0.5 for ImageNet.
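A minimal sketch of this masking, assuming the consistency loss is a per-example KL divergence between the target distribution and the prediction on the augmented example (the function name and inputs are illustrative):

```python
import torch
import torch.nn.functional as F

def masked_consistency_loss(target_logits, aug_log_probs, beta=0.8):
    """Consistency loss with confidence-based masking: only examples
    whose maximum predicted probability exceeds beta contribute
    (beta = 0.8 for CIFAR-10/SVHN, 0.5 for ImageNet)."""
    target = F.softmax(target_logits, dim=-1)
    # 1.0 for confident examples, 0.0 for the rest.
    mask = (target.max(dim=-1).values > beta).float()
    # Per-example KL divergence between target and augmented prediction.
    kl = F.kl_div(aug_log_probs, target, reduction="none").sum(dim=-1)
    # Average over the batch, zeroing out low-confidence examples.
    return (kl * mask).mean()
```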

Sharpening Predictions

Since regularizing predictions to have low entropy has been shown to be beneficial, predictions are sharpened when computing the target distribution on unlabeled examples by using a low softmax temperature τ.
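Sharpening amounts to dividing the logits by a temperature τ < 1 before the softmax, which makes the distribution more peaked (lower entropy). A minimal sketch; the default τ = 0.4 here is illustrative, as the paper treats τ as a tuned hyperparameter:

```python
import torch
import torch.nn.functional as F

def sharpen(logits, tau=0.4):
    """Sharpen a target distribution with a low softmax temperature.

    Dividing logits by tau < 1 widens the gaps between them, so the
    resulting softmax puts more mass on the top class.
    """
    return F.softmax(logits / tau, dim=-1)
```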

Learning Materials

Unsupervised Machine Learning

Unsupervised Data Augmentation for Consistency Training

Unsupervised Data Augmentation and Its Types

YouTube Videos

Unsupervised Data Augmentation

Data Augmentation using Pre-trained Transformer Models


by Devansh Shukla

"AI Tamil Nadu, formerly known as AI Coimbatore, is a close-knit community initiative by Navaneeth with a goal to offer world-class AI education to anyone in Tamil Nadu for free."